
    Beyond the lens: communicating context through sensing, video, and visualization

    Thesis (S.M.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from the PDF version of the thesis. Includes bibliographical references (p. 101-103).

    Responding to rapid growth in sensor network deployments that outpaces research efforts to understand or relate the new data streams, this thesis presents a collection of interfaces to sensor network data that encourage open-ended browsing while emphasizing saliency of representation. These interfaces interpret, visualize, and communicate context from sensors through control panels and virtual environments that synthesize multimodal sensor data into interactive visualizations. This work extends previous efforts in cross-reality to incorporate augmented video as well as complex interactive animations, using sensor fusion to saliently represent contextual information to users in a variety of application domains, from building information management to real-time risk assessment to personal privacy. Three applications were developed as part of this work and are discussed here: DoppelLab, an immersive, cross-reality browsing environment for sensor network data; Flurry, an installation that composites video from multiple sources throughout a building in real time to create an interactive, incorporative view of activity; and Tracking Risk with Ubiquitous Smart Sensing (TRUSS), an ongoing research effort aimed at applying real-time sensing, sensor fusion, and interactive visual analytic interfaces to construction site safety and decision support. Another project in active development, called the Disappearing Act, allows users to remove themselves from a set of live video streams using wearable sensor tags. Though these examples may seem disconnected, they share underlying technologies and research developments, as well as a common set of design principles, which are elucidated in this thesis. Building on developments in sensor networks, computer vision, and graphics, this work aims to create interfaces and visualizations that fuse perspectives, broaden contextual understanding, and encourage exploration of real-time sensor network data.

    by Gershon Dublon (S.M.)
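
    As a rough illustration of the kind of multimodal fusion an interface like DoppelLab performs, the sketch below time-aligns two hypothetical sensor streams into single fused records that a renderer could map to color and animation. The stream names, columns, and 2-second alignment tolerance are assumptions for illustration, not code from the thesis.

        # Minimal sketch: align two sensor streams on time so one fused record
        # per moment can drive an interactive visualization.
        import pandas as pd

        # Hypothetical streams: a temperature node and a motion (PIR) node,
        # each with its own timestamps.
        temps = pd.DataFrame({
            "t": pd.to_datetime(["2011-05-01 12:00:00", "2011-05-01 12:00:05"]),
            "node": ["temp-17", "temp-17"],
            "temp_c": [22.4, 22.6],
        }).sort_values("t")

        motion = pd.DataFrame({
            "t": pd.to_datetime(["2011-05-01 12:00:01", "2011-05-01 12:00:06"]),
            "node": ["pir-03", "pir-03"],
            "active": [True, False],
        }).sort_values("t")

        # Pair each temperature sample with the nearest motion sample within 2 s.
        fused = pd.merge_asof(temps, motion, on="t", direction="nearest",
                              tolerance=pd.Timedelta("2s"),
                              suffixes=("_temp", "_pir"))
        print(fused)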

    EMI Spy: Harnessing electromagnetic interference for low-cost, rapid prototyping of proxemic interaction

    We present a wearable system that uses ambient electromagnetic interference (EMI) as a signature to identify electronic devices and support proxemic interaction. We designed a low-cost tool, called EMI Spy, and a software environment for rapid deployment and evaluation of ambient EMI-based interactive infrastructure. EMI Spy captures electromagnetic interference and delivers the signal to a user's mobile device or PC, either through the device's wired audio input or wirelessly over Bluetooth. The wireless version can be worn on the wrist, communicating with the user's mobile device in their pocket. Users can train the system in less than 1 second to uniquely identify displays in a 2-m radius around them, as well as to detect pointing-at-a-distance and touching gestures on the displays in real time. The combination of a low-cost EMI logger and an open-source machine learning toolkit allows developers to quickly prototype proxemic, touch-to-connect, and gestural interaction. We demonstrate the feasibility of mobile, EMI-based device and gesture recognition with preliminary user studies in 3 scenarios, achieving 96% classification accuracy at close range for 6 digital signage displays distributed throughout a building, and 90% accuracy in classifying pointing gestures at neighboring desktop LCD displays. We were able to distinguish 1- and 2-finger touches with perfect accuracy, and we show indications of a way to determine the power consumption of a device via touch. Our system is particularly well suited to temporary use in a public space, where the sensors could be distributed to support a pop-up interactive environment anywhere with electronic devices. By designing for low-cost, mobile, flexible, and infrastructure-free deployment, we aim to enable a host of new proxemic interfaces to existing appliances and displays.
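
    The pipeline the abstract describes (capture EMI at audio rates, featurize each window, classify with an off-the-shelf learner) can be sketched as follows. The window size, feature choice, and nearest-neighbor classifier are illustrative assumptions, not the published EMI Spy implementation.

        # Minimal sketch: classify short EMI captures by their spectral
        # fingerprint, e.g. windows recorded at audio rate via a sound card.
        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier

        WINDOW = 4096  # samples per capture window (assumed)

        def emi_features(window):
            """Normalized log-magnitude spectrum of one EMI window."""
            spectrum = np.abs(np.fft.rfft(window * np.hanning(len(window))))
            logmag = np.log1p(spectrum)
            return logmag / (np.linalg.norm(logmag) + 1e-9)

        # Stand-in training data: a few labeled windows per signage display.
        rng = np.random.default_rng(0)
        captures = rng.standard_normal((60, WINDOW))  # placeholder recordings
        labels = np.repeat(np.arange(6), 10)          # 6 displays, 10 windows each

        X = np.array([emi_features(w) for w in captures])
        clf = KNeighborsClassifier(n_neighbors=3).fit(X, labels)

        # At runtime, each incoming window is classified to identify the
        # nearby display.
        print(clf.predict([emi_features(captures[0])]))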

    TRUSS: Tracking Risk with Ubiquitous Smart Sensing

    We present TRUSS, or Tracking Risk with Ubiquitous Smart Sensing, a novel system that infers and renders safety context on construction sites by fusing data from wearable devices, distributed sensing infrastructure, and video. Wearables stream real-time levels of dangerous gases, dust, noise, light quality, altitude, and motion to base stations that synchronize the mobile devices, monitor the environment, and capture video. At the same time, low-power video collection and processing nodes track workers as they move through the view of the cameras, identifying the tracks using information from the sensors. Together, these processes connect the context-mining wearable sensors to the video; information derived from the sensor data is used to highlight salient elements in the video stream. The augmented stream in turn gives users a better understanding of real-time risks and supports informed decision-making. We tested our system in an initial deployment on an active construction site.

    Sponsors: Intel Corporation; Massachusetts Institute of Technology, Media Laboratory; Eni S.p.A. (Firm)
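
    A minimal sketch of the track-identification step described above, assuming synchronized 1 Hz motion signals: each video track is paired with the wearable whose accelerometer energy best correlates with the track's image-plane speed. The signals and the greedy matching rule are illustrative stand-ins, not the deployed system.

        import numpy as np

        def normalized_corr(a, b):
            """Zero-mean, unit-variance correlation between two signals."""
            a = (a - a.mean()) / (a.std() + 1e-9)
            b = (b - b.mean()) / (b.std() + 1e-9)
            return float(np.mean(a * b))

        # Stand-in signals over the same 60 s window: track_speeds from the
        # video tracker, accel_energy from the wearables.
        rng = np.random.default_rng(1)
        walk = np.abs(rng.standard_normal(60))
        track_speeds = [walk + 0.1 * rng.standard_normal(60),
                        np.abs(rng.standard_normal(60))]
        accel_energy = [np.abs(rng.standard_normal(60)),
                        walk + 0.1 * rng.standard_normal(60)]

        scores = np.array([[normalized_corr(t, w) for w in accel_energy]
                           for t in track_speeds])

        # Assign the most confident track first, skipping claimed wearables.
        claimed = set()
        for track_id in np.argsort(-scores.max(axis=1)):
            wearable_id = next(int(j) for j in np.argsort(-scores[track_id])
                               if int(j) not in claimed)
            claimed.add(wearable_id)
            print(f"video track {int(track_id)} -> wearable {wearable_id}")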

    Technologies for new perceptual sensibilities

    Thesis (Ph.D.), Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2018. Cataloged from the PDF version of the thesis. Includes bibliographical references (pages 151-160).

    When we listen closely, there is a pervading sense that we could hear more if we could only focus a little more intently. Our own perceptual limits are a moving target that we cannot delineate and rarely reach. This dissertation introduces technologies that operate at that mysterious boundary. I envision sensor(y) landscapes, physical sites that meld distributed sensing and sensory perception to afford new perceptual sensibilities. Today's mainstream technologies are well designed for rapid consumption of information and linear, sequential action. A side effect of their effectiveness to task, however, is a loss of undirected, curiosity-driven exploration in the world. I propose alternative technologies that would extend perceptual presence, amplify attention, and leverage intuitions. My focus is on turning rich sensor data into compelling sensory input, and as such, a substantial component of my work involved deploying sensor infrastructure in beautiful places. My projects center on a wetland restoration site, called Tidmarsh, where environmental data are densely and continuously collected and streamed. Using sound and vibration as the medium and nature as the setting, I undertook this work in two steps. The first constructs environments suffused with sensing and built for being present in; my projects in this space comprise sensor-driven virtual worlds, glass elevator sound installations, and vibrating forests that give oral histories. Building on lessons and infrastructure from the first approach, my culminating work uses non-occluding spatial audio to create situated perceptions of data. I developed a bone-conduction headphone device, called HearThere, that renders a live soundscape from distributed microphones and sensors, fully merged with the user's natural hearing. HearThere combines its wearer's inferred listening state with classification output from an AI engine to adjust the mix and spatial parameters of virtual audio sources. The device was developed based on findings from lab studies of spatial hearing and attention, and evaluated in a human subjects study with a panel of experts. Through these projects, I found that deriving meaning in the medium is a matter of possessing or developing perceptual sensibilities, intuitions for how the permeated data can be teased out and contemplated. Carefully composed perceptual confusion, a blurring of place and distributed media, becomes an opportunity for the development of new transpresence sensibilities. How do users make sense of these new dimensions of perception, and how can technologies be designed to facilitate perceptual sense-making?

    by Gershon Dublon (Ph.D.)
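
    As a toy illustration of the mixing behavior described above, the sketch below scales each virtual audio source by an inferred attention weight before summing into the output. The source names, weights, and gain law are hypothetical, not HearThere's actual policy.

        import numpy as np

        def mix(sources, attention):
            """sources: name -> waveform; attention: name -> weight in [0, 1]."""
            weights = np.array([attention[name] for name in sources])
            weights = weights / (weights.sum() + 1e-9)  # keep overall level stable
            return sum(w * s for w, s in zip(weights, sources.values()))

        t = np.linspace(0, 1, 44100)
        sources = {"marsh_hydrophone": np.sin(2 * np.pi * 220 * t),
                   "treetop_mic": np.sin(2 * np.pi * 330 * t)}
        # Hypothetical classifier output: the wearer seems focused on the
        # hydrophone, so its stream dominates the rendered soundscape.
        out = mix(sources, {"marsh_hydrophone": 0.8, "treetop_mic": 0.2})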

    ListenTree: Audio-Haptic Display in the Natural Environment

    Presented at the 20th International Conference on Auditory Display (ICAD 2014), June 22-25, 2014, New York, NY.

    In this paper, we present ListenTree, an audio-haptic display embedded in the natural environment. A visitor to our installation notices a faint sound appearing to emerge from a tree, and might feel a slight vibration under their feet as they approach. By resting their head against the tree, they are able to hear sound through bone conduction. To create this effect, an audio exciter transducer is weatherproofed and attached to the tree trunk underground, transforming the tree into a living speaker that channels audio through its branches. Any source of sound can be played through the tree, including live audio or pre-recorded tracks. For example, we used the ListenTree to display live streaming sound from an outdoor ecological monitoring sensor network, bringing an urban audience into contact with a faraway wetland. Our intervention is motivated by a need for forms of display that fade into the background, inviting attention rather than requiring it. ListenTree points to a future where digital information might become a seamless part of the physical world.

    Networked Sensory Prosthetics Through Auditory Augmented Reality

    In this paper we present a vision for scalable indoor and outdoor auditory augmented reality (AAR), as well as HearThere, a wearable device and infrastructure demonstrating the feasibility of that vision. HearThere preserves the spatial alignment between virtual audio sources and the user's environment, using head tracking and bone conduction headphones to achieve seamless mixing of real and virtual sounds. To scale between indoor, urban, and natural environments, our system supports multi-scale location tracking, using fine-grained (20 cm) Ultra-WideBand (UWB) radio tracking when in range of our infrastructure anchors and mobile GPS otherwise. In our tests, users were able to navigate through an AAR scene and pinpoint audio source locations down to 1 m. We found that bone conduction is a viable technology for producing realistic spatial sound, and show that users' audio localization ability is considerably better in UWB coverage zones than with GPS alone. HearThere is a major step toward realizing our vision of networked sensory prosthetics, in which sensor networks serve as collective sensory extensions into the world around us. In our vision, AAR would be used to mix spatialized data sonification with distributed, live-streaming microphones. In this concept, HearThere promises a more expansive perceptual world, or umwelt, where sensor data becomes immediately attributable to extrinsic phenomena, externalized in the wearer's perception. We are motivated by two goals: first, to remedy a fractured state of attention caused by existing mobile and wearable technologies; and second, to bring the distant or often invisible processes underpinning a complex natural environment more directly into human consciousness.
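
    A minimal sketch of the multi-scale tracking fallback described above, assuming planar coordinates: prefer a UWB fix when an anchor is in range, fall back to GPS otherwise, then compute the head-relative azimuth at which to render a source. All coordinates, headings, and field names here are invented for illustration.

        import math

        def current_position(uwb_fix, gps_fix):
            # The ~20 cm UWB fix wins whenever the anchors can see the tag;
            # otherwise use the coarser mobile GPS estimate.
            return uwb_fix if uwb_fix is not None else gps_fix

        def source_azimuth(listener_xy, heading_rad, source_xy):
            """Angle of the source relative to the listener's facing, radians."""
            dx = source_xy[0] - listener_xy[0]
            dy = source_xy[1] - listener_xy[1]
            world = math.atan2(dy, dx)
            return (world - heading_rad + math.pi) % (2 * math.pi) - math.pi

        pos = current_position(uwb_fix=(3.20, 7.85), gps_fix=(3.0, 8.0))
        az = source_azimuth(pos, heading_rad=math.radians(90),
                            source_xy=(10.0, 7.85))
        print(f"render source at {math.degrees(az):.1f} deg relative to the head")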

    PEM-ID: Identifying People by Gait-Matching using Cameras and Wearable Accelerometers

    The ability to localize and identify multiple people is paramount to the inference of high-level activities for informed decision-making. In this paper, we describe the PEM-ID system, which uniquely identifies people tagged with accelerometer nodes in the video output of preinstalled infrastructure cameras. To do so, we introduce a new distance measure between signals composed of timestamps of gait landmarks, and use it to identify each tracked person in the video by pairing them with a wearable accelerometer node.
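
    A minimal sketch of the matching idea, with a simple nearest-landmark gap standing in for the paper's distance measure (which is not reproduced here): score each (video track, wearable node) pair by how well their gait-landmark timestamps line up, then solve the global assignment.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def landmark_distance(a, b):
            """Mean gap between each landmark in `a` and its nearest in `b`."""
            return float(np.mean([np.min(np.abs(b - t)) for t in a]))

        # Hypothetical heel-strike times (seconds) from two camera tracks and
        # two accelerometer nodes observing the same interval.
        tracks = [np.array([0.52, 1.08, 1.61]), np.array([0.31, 0.95, 1.55])]
        nodes = [np.array([0.30, 0.96, 1.54]), np.array([0.50, 1.10, 1.60])]

        cost = np.array([[landmark_distance(t, n) for n in nodes]
                         for t in tracks])
        rows, cols = linear_sum_assignment(cost)  # minimum-cost pairing
        for r, c in zip(rows, cols):
            print(f"video track {r} is the person wearing node {c}")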